Shared Generative Latent Representation Learning for Multi-View Clustering
Authors
Abstract
Similar resources
Shared Subspace Learning for Latent Representation of Multi-View Data
The pervasive existence of multi-view data has confronted conventional single-view data analysis methods with great challenges, and developing new analysis techniques for multi-view data has become one of the active topics in the field of machine learning. From the standpoint of shared subspace learning, this paper focuses on capturing the shared latent representation across multiple views by constructing t...
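A minimal sketch of the shared-subspace idea above, in notation assumed here for illustration rather than taken from the paper: each view is modeled as a view-specific projection of one common latent representation,
\[
X^{(v)} \approx P^{(v)} H, \qquad v = 1, \dots, V,
\]
where $X^{(v)}$ is the data matrix of view $v$, $P^{(v)}$ is a view-specific basis, and the shared latent matrix $H$ is what downstream clustering would operate on.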
Scalable Representation and Learning for 3D Object Recognition Using Shared Feature-Based View Clustering
In this paper, we present a new scalable 3D object representation and learning method for recognizing many objects. Scalability is an important issue in object recognition because it reduces memory usage and recognition time. The key idea of the scalable representation is to combine a feature-sharing concept with view clustering in part-based object representation (especially a CFCM: common frame constellat...
Multi-view Self-Paced Learning for Clustering
Exploiting the information from multiple views can improve clustering accuracy. However, most existing multi-view clustering algorithms are nonconvex and are thus prone to getting stuck in bad local minima, especially when there are outliers and missing data. To overcome this problem, we present a new multi-view self-paced learning (MSPL) algorithm for clustering that learns the multi-view ...
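As a hedged sketch of the self-paced idea (generic self-paced learning, not necessarily the exact MSPL objective), per-sample weights $v_i \in [0,1]$ are learned jointly with the model so that easy samples dominate early training:
\[
\min_{\theta,\; v \in [0,1]^n} \; \sum_{i=1}^{n} v_i\, \ell_i(\theta) \;-\; \lambda \sum_{i=1}^{n} v_i,
\]
where $\ell_i(\theta)$ is the clustering loss on sample $i$ and the pace parameter $\lambda$ is gradually increased so that harder samples (e.g. outliers) enter the objective only later.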
Distributed Multi-Task Learning with Shared Representation
We study the problem of distributed multi-task learning with a shared representation, where each machine aims to learn a separate, but related, task in an unknown shared low-dimensional subspace, i.e. when the predictor matrix has low rank. We consider a setting where each task is handled by a different machine, with samples for the task available locally on that machine, and study communication-e...
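A minimal formulation of the low-rank assumption mentioned above, in notation assumed here for illustration: stacking the per-task predictors $w_1, \dots, w_T$ into a matrix $W$, the shared low-dimensional subspace corresponds to $W$ having low rank,
\[
\min_{W = [w_1, \dots, w_T]} \; \sum_{t=1}^{T} \frac{1}{n_t} \sum_{i=1}^{n_t} \ell\big(\langle w_t, x_{t,i} \rangle,\, y_{t,i}\big) \quad \text{s.t.} \quad \operatorname{rank}(W) \le r,
\]
where task $t$ keeps its $n_t$ samples locally on its own machine; the rank constraint (often relaxed to a nuclear-norm penalty) is what encodes the shared representation.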
Multi-Conditional Learning: Generative/Discriminative Training for Clustering and Classification
This paper presents multi-conditional learning (MCL), a training criterion based on a product of multiple conditional likelihoods. When combining the traditional conditional probability of “label given input” with a generative probability of “input given label”, the latter acts as a surprisingly effective regularizer. When applied to models with latent variables, MCL combines the structure-discov...
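A sketch of the multi-conditional criterion as described above, in assumed notation (the exponents are an illustrative weighting, not necessarily the paper's exact form): the objective is a product of conditional likelihoods, with the discriminative term $p(y \mid x)$ combined with the generative term $p(x \mid y)$ acting as a regularizer,
\[
\mathcal{L}_{\mathrm{MCL}}(\theta) \;=\; \prod_{i=1}^{n} p_\theta(y_i \mid x_i)^{\alpha}\, p_\theta(x_i \mid y_i)^{\beta},
\]
so that maximizing $\log \mathcal{L}_{\mathrm{MCL}}$ recovers the “label given input” / “input given label” combination described in the abstract.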
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2020
ISSN: 2374-3468,2159-5399
DOI: 10.1609/aaai.v34i04.6146